05-08-2025
Data limitations hindering AI adoption: insights from financial services
You've likely heard the adage that 'You can't have an AI strategy without a data strategy.' But what does that mean in practice? As someone who regularly explores this question with data leaders, I've gained insights into the challenges organizations face when implementing AI and how they're overcoming them.
Enterprise data management is a daunting task, even before factoring in the levels of accuracy, completeness, and consistency required to bring reliable AI applications to market. And challenges are widespread. According to a survey by EY, only 36% of senior leaders report that their organizations are investing fully in data infrastructure—covering quality, accessibility, and governance of data—to support AI initiatives at scale. Another EY survey found that 67% of leaders say data infrastructure is holding them back.
I, too, am no stranger to how data limitations impact enterprise initiatives. Inaccurate and incomplete data undermines AI-driven marketing campaigns—from misdirected targeting to the potential perpetuation of societal bias. However, I've found that the financial services industry has been setting a great example of how to embrace data readiness, paving the way for efficient AI adoption.
The conversations reveal a common understanding: Successful AI implementation doesn't depend on good technology alone; it requires strong data strategies and the judgment to know when and where to deploy AI effectively.
FINANCIAL SERVICES MAKING DATA 'AI-READY'
Data readiness is often misunderstood as a technical checklist, but in reality, it requires a fundamental shift in how we think about data. Historically, financial services firms have relied on data labels and tags that seemed self-evidently correct for reporting purposes. However, AI demands a deeper level of scrutiny—moving beyond surface accuracy to ensure the data truly reflects the nuances machine learning models need to perform effectively.
One of the most striking examples I've encountered involved a financial services company's three-year effort to build a discriminative AI model. Despite having what they thought was a well-labeled dataset, early attempts resulted in poor accuracy. It wasn't until consultation with their algorithm team that they uncovered a crucial gap: The data labels were accurate for reporting purposes, but failed to account for variability in market conditions and trade parameters.
To address this, the team applied techniques like principal component analysis (PCA) and interquartile range (IQR) filtering to reduce noise in their data. They also created new features specifically designed to filter training datasets, identifying and isolating trades that fit typical implementation patterns. These methods transformed data from being merely correct to truly AI-ready—fit for the purpose of building reliable models.
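As a rough illustration of what that might look like in practice, here is a minimal sketch of IQR filtering followed by PCA-based noise reduction on a tabular dataset. The column names, thresholds, and synthetic data are hypothetical; the firm's actual pipeline wasn't disclosed.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def iqr_filter(df: pd.DataFrame, cols: list[str], k: float = 1.5) -> pd.DataFrame:
    """Drop rows whose values fall outside k * IQR of each column."""
    mask = pd.Series(True, index=df.index)
    for col in cols:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask &= df[col].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Hypothetical trade features; real column names would come from the dataset.
trades = pd.DataFrame(
    np.random.default_rng(0).normal(size=(1000, 4)),
    columns=["notional", "spread", "duration", "slippage"],
)

filtered = iqr_filter(trades, list(trades.columns))

# Denoise: keep only the principal components explaining 95% of variance.
scaled = StandardScaler().fit_transform(filtered)
pca = PCA(n_components=0.95)
denoised = pca.fit_transform(scaled)
print(f"Kept {len(filtered)} of {len(trades)} rows, "
      f"{pca.n_components_} of {scaled.shape[1]} components")
```

The IQR filter removes trades with extreme parameter values, and the PCA step discards low-variance directions that often carry noise rather than signal.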
This example highlights the importance of viewing data as more than a static resource: it's something to be actively curated and continuously refined. AI readiness isn't about having a perfect dataset; it's about understanding the imperfections and compensating for them through thoughtful design and ongoing iteration.
TACKLING DATA GAPS WITH TRANSPARENCY AND COLLABORATION
Addressing data gaps is rarely a solitary effort. Many leaders I've spoken to have embraced transparency and collaboration, which are invaluable mechanisms for enhancing data quality and driving meaningful AI outcomes. This approach is one that we should all adopt as we embark on our AI journeys.
In my experience, the most successful organizations have implemented transparency frameworks around AI initiatives, and they typically center around three pillars:
1. Data Disclosure
Leading firms openly share what data was used in a model and, more importantly, what data they wish they had. Being upfront about these gaps can lead to valuable feedback from clients and colleagues, helping to identify areas where additional data collection or adjustments are needed.
2. Feature Transparency
Forward-thinking organizations disclose both the features used in their models and those that were intentionally excluded. This approach sparks valuable discussions with stakeholders, who may even identify features that hadn't been considered, leading to significant model improvements.
3. Model Selection Rationale
AI trailblazers explain the reasoning behind the models they use, whether it's an XGBoost model, a random forest, or a less transparent neural network. This clarity builds trust with both internal teams and external clients, ensuring they feel included in the process rather than sidelined by it.
When implemented, these principles help foster a culture of openness that addresses skepticism head-on. Organizations that embrace them tend to see more success because they create an environment where data quality issues are identified early and addressed collaboratively. This structured transparency makes data more accessible and understandable across the enterprise, bridging the gap between technical teams and business stakeholders.
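To make the three pillars concrete, here is a minimal sketch of what a structured disclosure record might look like. The field names and example values are hypothetical, not an industry standard; the point is that each pillar becomes an explicit, reviewable artifact.

```python
from dataclasses import dataclass

@dataclass
class ModelDisclosure:
    """Hypothetical record capturing the three transparency pillars."""
    # 1. Data disclosure: what was used, and what gaps remain.
    datasets_used: list[str]
    known_data_gaps: list[str]
    # 2. Feature transparency: inputs included and deliberately excluded.
    features_included: list[str]
    features_excluded: dict[str, str]  # feature -> reason for exclusion
    # 3. Model selection rationale.
    model_family: str
    selection_rationale: str

disclosure = ModelDisclosure(
    datasets_used=["2019-2024 trade history"],
    known_data_gaps=["no off-exchange venue data"],
    features_included=["notional", "spread", "duration"],
    features_excluded={"counterparty_id": "privacy and bias risk"},
    model_family="gradient-boosted trees (XGBoost)",
    selection_rationale="interpretable feature importances preferred "
                        "over a less transparent neural network",
)
```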
A FRAMEWORK FOR THOUGHTFUL AI DEPLOYMENT
When it comes to deploying AI, whether in financial services or another industry, one of the most important insights I've gleaned is that just because you can build a model doesn't mean you should deploy it. Thoughtful AI deployment isn't about rushing to apply AI to every problem—it's about making purposeful decisions that consider both the value AI can add and its associated risks.
As one senior data leader emphasized in a recent conversation, this starts with a clear understanding of the context in which a model will be used. They shared how their team once spent months manually tagging data to ensure a generative AI model could provide accurate and contextualized responses. This process was labor-intensive but critical to building trust in the model's outputs, particularly in a regulated industry like finance, where mistakes carry significant reputational and compliance risks.
Another key consideration is knowing when AI is not the right tool for the job. Sometimes, a simpler solution, like a Tableau dashboard or a regression model, is a better fit. Being transparent about where your organization chooses not to deploy AI—and why—is just as important as highlighting successes. This openness builds trust, both within your organization and with clients.
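To illustrate the "simpler tool" option, a transparent regression baseline can often be stood up in a few lines. The data below is synthetic and purely for demonstration; if such a baseline already explains the signal well, a more complex AI model may add risk without adding value.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a business metric driven by a few known factors.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))  # e.g., volume, rate, tenor (hypothetical)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
baseline = LinearRegression().fit(X_train, y_train)

# A strong score here is a signal to pause before reaching for AI.
print(f"R^2 on held-out data: {r2_score(y_test, baseline.predict(X_test)):.3f}")
```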
AI'S FUTURE IN FINANCIAL SERVICES
Ultimately, deploying AI thoughtfully requires good judgment, and that judgment must be guided by the principle that AI should enhance human decision-making rather than replace it. The financial services industry is on the fast track to adoption, and by focusing on thoughtful deployment, transparency, and collaboration, we can unlock AI's potential while ensuring it remains a tool that empowers the people who use it.